
    Improved Distance Sensitivity Oracles with Subcubic Preprocessing Time

    We consider the problem of building Distance Sensitivity Oracles (DSOs). Given a directed graph G = (V, E) with edge weights in {1, 2, …, M}, we need to preprocess it into a data structure that answers the following queries: given vertices u, v ∈ V and a failed vertex or edge f ∈ (V ∪ E), output the length of the shortest path from u to v that does not go through f. Our main result is a simple DSO with Õ(n^2.7233 M) preprocessing time and O(1) query time. Moreover, if the input graph is undirected, the preprocessing time can be improved to Õ(n^2.6865 M). The preprocessing algorithm is randomized and correct with probability ≥ 1 - 1/n^C, for a constant C that can be made arbitrarily large. Previously, the best known DSO had Õ(n^2.8729 M) preprocessing time and polylog(n) query time [Chechik and Cohen, STOC'20]. At the core of our DSO is the following observation from [Bernstein and Karger, STOC'09]: if there is a DSO with preprocessing time P and query time Q, then we can construct a DSO with preprocessing time P + Õ(n^2)·Q and query time O(1). (Here Õ(·) hides polylog(n) factors.) To appear in ESA'20.
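
    The query semantics can be illustrated with a minimal brute-force sketch (not the paper's data structure): skip preprocessing entirely and answer each query by rerunning Dijkstra on the graph with the failed vertex or edge removed, paying O(m log n) per query instead of O(1).

```python
import heapq

def dijkstra_avoiding(n, adj, s, failed_vertex=None, failed_edge=None):
    """Single-source shortest path lengths, skipping a failed vertex/edge."""
    dist = [float("inf")] * n
    dist[s] = 0
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if v == failed_vertex or (u, v) == failed_edge:
                continue
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

class NaiveDSO:
    """Brute-force DSO: no preprocessing; each query (u, v, f) with
    f not in {u, v} reruns Dijkstra on G - f."""

    def __init__(self, n, weighted_edges):  # weighted_edges: [(u, v, w), ...]
        self.n = n
        self.adj = [[] for _ in range(n)]
        for u, v, w in weighted_edges:
            self.adj[u].append((v, w))

    def query(self, u, v, failed_vertex=None, failed_edge=None):
        return dijkstra_avoiding(self.n, self.adj, u,
                                 failed_vertex, failed_edge)[v]

# Failing vertex 1 blocks the 2-hop path, forcing the direct edge of weight 5.
dso = NaiveDSO(3, [(0, 1, 1), (1, 2, 1), (0, 2, 5)])
assert dso.query(0, 2, failed_vertex=1) == 5
```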

    Hardness of KT Characterizes Parallel Cryptography

    A recent breakthrough of Liu and Pass (FOCS'20) shows that one-way functions exist if and only if the (polynomial-)time-bounded Kolmogorov complexity, K^t, is bounded-error hard on average to compute. In this paper, we strengthen this result and extend it to other complexity measures:
    - We show, perhaps surprisingly, that the KT complexity is bounded-error average-case hard if and only if there exist one-way functions in constant parallel time (i.e., in NC⁰). This result crucially relies on the idea of randomized encodings. Previously, a seminal work of Applebaum, Ishai, and Kushilevitz (FOCS'04; SICOMP'06) used the same idea to show that NC⁰-computable one-way functions exist if and only if logspace-computable one-way functions exist.
    - Inspired by the above result, we present randomized average-case reductions among the NC¹-versions and logspace-versions of K^t complexity, and the KT complexity. Our reductions preserve both bounded-error average-case hardness and zero-error average-case hardness. To the best of our knowledge, this is the first reduction between the KT complexity and a variant of K^t complexity.
    - We prove tight connections between the hardness of K^t complexity and the hardness of (the hardest) one-way functions. In analogy with the Exponential-Time Hypothesis and its variants, we define and motivate the Perebor Hypotheses for complexity measures such as K^t and KT (see the sketch after this list). We show that a Strong Perebor Hypothesis for K^t implies the existence of (weak) one-way functions of near-optimal hardness 2^{n-o(n)}. To the best of our knowledge, this is the first construction of one-way functions of near-optimal hardness based on a natural complexity assumption about a search problem.
    - We show that a Weak Perebor Hypothesis for MCSP implies the existence of one-way functions, and establish a partial converse. This is the first unconditional construction of one-way functions from the hardness of MCSP over a natural distribution.
    - Finally, we study the average-case hardness of MKtP. We show that it characterizes cryptographic pseudorandomness in one natural regime of parameters, and complexity-theoretic pseudorandomness in another natural regime.
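
    The Perebor ("brute-force search") Hypotheses formalize the belief that exhaustive search is essentially unavoidable for problems like computing K^t. As a toy illustration of what perebor means operationally, the sketch below inverts a function by trying all 2^n inputs; the stand-in candidate function (truncated SHA-256) is a hypothetical choice for the demo, not a construction from the paper.

```python
import hashlib
from itertools import product

def f(x: bytes) -> bytes:
    # Hypothetical stand-in one-way function candidate: truncated SHA-256,
    # used here only so that f is length-preserving. Not from the paper.
    return hashlib.sha256(x).digest()[:len(x)]

def perebor_invert(y: bytes, n_bytes: int):
    """Exhaustive ('perebor') preimage search: up to 256**n_bytes = 2^n
    evaluations of f, the cost that a Strong Perebor Hypothesis posits is
    essentially unavoidable."""
    for cand in product(range(256), repeat=n_bytes):
        x = bytes(cand)
        if f(x) == y:
            return x
    return None

# Tiny demo with 2 bytes (2^16 candidates); the hypotheses concern large n.
assert perebor_invert(f(b"ok"), 2) is not None
```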

    A Relativization Perspective on Meta-Complexity

    Meta-complexity studies the complexity of computational problems about complexity theory, such as the Minimum Circuit Size Problem (MCSP) and its variants (a toy brute-force sketch of MCSP follows this abstract). We show that a relativization barrier applies to many important open questions in meta-complexity. We give relativized worlds where:
    1) MCSP can be solved in deterministic polynomial time, but the search version of MCSP cannot be solved in deterministic polynomial time, even approximately. In contrast, Carmosino, Impagliazzo, Kabanets, and Kolokolova [CCC'16] gave a randomized approximate search-to-decision reduction for MCSP with a relativizing proof.
    2) The complexities of MCSP[2^{n/2}] and MCSP[2^{n/4}] are different, in both worst-case and average-case settings. Thus the complexity of MCSP is not "robust" to the choice of the size function.
    3) Levin's time-bounded Kolmogorov complexity Kt(x) can be approximated to a factor (2+ε) in polynomial time, for any ε > 0.
    4) Natural proofs do not exist, and neither do auxiliary-input one-way functions. In contrast, Santhanam [ITCS'20] gave a relativizing proof that the non-existence of natural proofs implies the existence of one-way functions under a conjecture about optimal hitting sets.
    5) DistNP does not reduce to GapMINKT by a family of "robust" reductions. This presents a technical barrier for solving a question of Hirahara [FOCS'20].
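
    For concreteness, here is a perebor-style decision procedure for toy MCSP instances over the basis {AND, OR, NOT}. This is a didactic sketch of the problem itself, not an algorithm from the paper; the brute force runs in time exponential in the size bound s.

```python
def mcsp_brute_force(truth_table: int, n: int, s: int) -> bool:
    """Toy MCSP: is there a circuit with at most s AND/OR/NOT gates over
    inputs x_0..x_{n-1} computing the function whose truth table is given
    (bit k of truth_table = value on the k-th input assignment)?"""
    N = 1 << n
    mask = (1 << N) - 1
    # Truth tables of the input variables, packed into integers.
    inputs = []
    for i in range(n):
        tt = 0
        for k in range(N):
            if (k >> i) & 1:
                tt |= 1 << k
        inputs.append(tt)

    def search(lines, gates_left):
        if truth_table in lines:
            return True
        if gates_left == 0:
            return False
        m = len(lines)
        candidates = set()
        for i in range(m):
            candidates.add(~lines[i] & mask)           # NOT gate
            for j in range(i, m):
                candidates.add(lines[i] & lines[j])    # AND gate
                candidates.add(lines[i] | lines[j])    # OR gate
        if truth_table in candidates:                  # one more gate suffices
            return True
        if gates_left == 1:
            return False
        return any(g not in lines and search(lines + [g], gates_left - 1)
                   for g in candidates)

    return search(inputs, s)

# XOR on 2 inputs (truth table 0b0110) as (x|y) & ~(x&y): 4 gates suffice.
assert mcsp_brute_force(0b0110, 2, 4)
```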

    Approximating All-Pair Bounded-Leg Shortest Path and APSP-AF in Truly-Subcubic Time

    In the bounded-leg shortest path (BLSP) problem, we are given a weighted graph G with nonnegative edge lengths, and we want to answer queries of the form "what is the shortest path from u to v, where only edges of length at most L are considered?". The all-pairs shortest path for all flows (APSP-AF) problem generalizes this: each edge also carries a capacity, and a query asks for the shortest path from u to v where only edges of capacity ≥ f are considered. In this article we give an Õ(n^{(ω+3)/2} ε^{-3/2} log W) time algorithm to compute a data structure that answers APSP-AF queries in O(log(ε^{-1} log(nW))) time and achieves a (1+ε)-approximation, where ω < 2.373 is the exponent of matrix multiplication, W is the upper bound on the integer edge lengths, and n is the number of vertices. This is the first truly-subcubic time algorithm for these problems on dense graphs. Our algorithm utilizes the O(n^{(ω+3)/2}) time max-min product algorithm [Duan and Pettie 2009]. Since the all-pairs bottleneck path (APBP) problem, which is equivalent to max-min product, can be seen as all-pairs reachability for all flows, our approach indeed shows that these problems are almost equivalent in the approximation sense.
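
    The max-min product at the heart of this approach is easy to state; below is a cubic-time reference implementation for intuition (the paper instead relies on the O(n^{(ω+3)/2}) time algorithm of Duan and Pettie).

```python
NEG_INF = float("-inf")

def max_min_product(A, B):
    """Cubic-time max-min product: C[i][j] = max_k min(A[i][k], B[k][j]).
    With A = B the matrix of edge capacities, iterating this product to a
    fixed point yields all-pairs bottleneck paths (APBP)."""
    n = len(A)
    C = [[NEG_INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            best = NEG_INF
            for k in range(n):
                v = min(A[i][k], B[k][j])
                if v > best:
                    best = v
            C[i][j] = best
    return C

# Small check: max(min(1,1), min(2,3)) = 2.
assert max_min_product([[1, 2], [3, 4]], [[1, 2], [3, 4]])[0][0] == 2
```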

    Constructing a Distance Sensitivity Oracle in O(n^2.5794 M) Time

    We continue the study of distance sensitivity oracles: given a directed graph G with edge weights in {1, 2, …, M}, preprocess it into a data structure that, for any query (u, v, f), reports the length of the shortest path from u to v avoiding the failed vertex or edge f. Our main result is a DSO with O(n^2.5794 M) preprocessing time and constant query time. A key subroutine of the preprocessing runs on graphs with edge weights in {1, 2, …, M} in O(n^2.5286 M) time; this algorithm is crucial in the preprocessing algorithm of our DSO. Our solution improves the O(n^2.6865 M) time bound in [Ren, ESA 2020], and matches the current best time bound for computing all-pairs shortest paths.
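
    For reference, the all-pairs shortest paths benchmark mentioned here can be computed with the textbook min-plus ("distance") product by repeated squaring; the truly-subcubic algorithms at issue replace this cubic product with fast matrix multiplication. A plain sketch:

```python
INF = float("inf")

def min_plus_product(A, B):
    """Distance (min-plus) product: C[i][j] = min_k A[i][k] + B[k][j]."""
    n = len(A)
    C = [[INF] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a = A[i][k]
            if a == INF:
                continue
            row_b, row_c = B[k], C[i]
            for j in range(n):
                if a + row_b[j] < row_c[j]:
                    row_c[j] = a + row_b[j]
    return C

def apsp(W):
    """APSP by repeated min-plus squaring, O(n^3 log n) overall:
    W[i][j] is the edge weight (INF if absent) and W[i][i] = 0."""
    D = W
    for _ in range(max(1, len(W) - 1).bit_length()):
        D = min_plus_product(D, D)
    return D

W = [[0, 3, INF], [INF, 0, 4], [INF, INF, 0]]
assert apsp(W)[0][2] == 7
```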

    Bounded Relativization

    Relativization is one of the most fundamental concepts in complexity theory, which explains the difficulty of resolving major open problems. In this paper, we propose a weaker notion of relativization called bounded relativization. For a complexity class C, we say that a statement is C-relativizing if the statement holds relative to every oracle O ∈ C. It is easy to see that every result that relativizes also C-relativizes for every complexity class C. On the other hand, we observe that many non-relativizing results, such as IP = PSPACE, are in fact PSPACE-relativizing.
    First, we use the idea of bounded relativization to obtain new lower bound results, including the following nearly maximum circuit lower bound: for every constant ε > 0, BPE^{MCSP}/2^{εn} ⊄ SIZE[2^n/n]. We prove this by PSPACE-relativizing the recent pseudodeterministic pseudorandom generator by Lu, Oliveira, and Santhanam (STOC 2021).
    Next, we study the limitations of PSPACE-relativizing proof techniques, and show that a seemingly minor improvement over the known results using PSPACE-relativizing techniques would imply a breakthrough separation NP ≠ L. For example:
    - Impagliazzo and Wigderson (JCSS 2001) proved that if EXP ≠ BPP, then BPP admits infinitely-often subexponential-time heuristic derandomization. We show that their result is PSPACE-relativizing, and that improving it to worst-case derandomization using PSPACE-relativizing techniques implies NP ≠ L.
    - Oliveira and Santhanam (STOC 2017) proved that every dense subset in P admits an infinitely-often subexponential-time pseudodeterministic construction, which we observe is PSPACE-relativizing. Improving this to almost-everywhere (pseudodeterministic) or (infinitely-often) deterministic constructions by PSPACE-relativizing techniques implies NP ≠ L.
    - Santhanam (SICOMP 2009) proved that pr-MA does not have fixed polynomial-size circuits. This lower bound can be shown to be PSPACE-relativizing, and we show that improving it to an almost-everywhere lower bound using PSPACE-relativizing techniques implies NP ≠ L.
    In fact, we show that if we can use PSPACE-relativizing techniques to obtain the above-mentioned improvements, then PSPACE ≠ EXPH. We obtain our barrier results by constructing suitable oracles computable in EXPH relative to which these improvements are impossible.

    Effects of Graphite Additions on Microstructures and Wear Resistance of Fe-Cr-C-Nb Hardfacing Alloys

    Hardfacing alloys with different carbon contents, obtained by varying the graphite additions in flux-cored wires, were deposited on a surface of C45E4 steel (ISO 683) by open-arc overlaying. Scanning electron microscopy (SEM), transmission electron microscopy (TEM), X-ray diffraction (XRD), a Rockwell hardness tester and an abrasion tester were used to study the effect of the graphite additions on the microstructures, hardness and abrasive resistance of the hardfacing alloys. The results show that the microstructures of the hardfacing alloys consisted of ferrite, martensite, retained austenite, independent austenite and NbC particles. As the graphite additions increased, the carbon contents gradually increased and the microstructures changed from ferrite plus NbC particles to martensite with retained austenite and larger NbC particles, accompanied by increasing hardness and better abrasive resistance. The best performance was obtained at a graphite addition of 60 g: the highest hardness was 61.8 HRC, and the wear resistance was nearly four times that of the base metal. However, excessive graphite additions resulted in some independent austenite in the microstructures, together with martensite, retained austenite and NbC particles, which deteriorated the performance of the hardfacing alloys.

    Polynomial-Time Pseudodeterministic Construction of Primes

    A randomized algorithm for a search problem is *pseudodeterministic* if it produces a fixed canonical solution to the search problem with high probability. In their seminal work on the topic, Gat and Goldwasser posed as their main open problem whether prime numbers can be pseudodeterministically constructed in polynomial time. We provide a positive solution to this question in the infinitely-often regime. In more detail, we give an *unconditional* polynomial-time randomized algorithm B such that, for infinitely many values of n, B(1^n) outputs a canonical n-bit prime p_n with high probability. More generally, we prove that for every dense property Q of strings that can be decided in polynomial time, there is an infinitely-often pseudodeterministic polynomial-time construction of strings satisfying Q. This improves upon a subexponential-time construction of Oliveira and Santhanam. Our construction uses several new ideas, including a novel bootstrapping technique for pseudodeterministic constructions, and a quantitative optimization of the uniform hardness-randomness framework of Chen and Tell, using a variant of the Shaltiel-Umans generator.
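
    For intuition, here is a folklore-style sketch of a pseudodeterministic prime constructor (not the paper's algorithm): scan the n-bit integers in a fixed order and return the first one that passes Miller-Rabin. The output is canonical, so independent runs agree with high probability; the catch is that its polynomial running time rests on unproven conjectures about prime gaps, exactly the kind of conditionality the paper's construction avoids.

```python
import random

def miller_rabin(n: int, rounds: int = 40) -> bool:
    """Randomized primality test; errs (one-sidedly) with prob <= 4**-rounds."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

def canonical_prime(n_bits: int) -> int:
    """Pseudodeterministic sketch: scan upward from 2^(n-1); with high
    probability every run returns the same (smallest) n-bit prime."""
    x = 1 << (n_bits - 1)
    while not miller_rabin(x):
        x += 1
    return x

assert canonical_prime(8) == 131  # smallest 8-bit prime, same on every run
```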

    NP-Hardness of Approximating Meta-Complexity: A Cryptographic Approach

    It is a long-standing open problem whether the Minimum Circuit Size Problem (MCSP) and related meta-complexity problems are NP-complete. Even in the rare cases where the NP-hardness of meta-complexity problems is known, we only know very weak hardness of approximation. In this work, we prove NP-hardness of approximating meta-complexity with nearly-optimal approximation gaps. Our key idea is to use *cryptographic constructions* in our reductions, where the security of the cryptographic construction implies the correctness of the reduction. We present both conditional and unconditional hardness of approximation results:
    - Assuming subexponentially-secure witness encryption exists, we prove essentially optimal NP-hardness of approximating conditional time-bounded Kolmogorov complexity (K^t(x | y)) in the regime where t ≫ |y| (a toy brute-force evaluator for this quantity is sketched after this abstract). Previously, the best known hardness of approximation was a factor of |x|^{1/poly(log log |x|)}, and only in the sublinear regime (t ≪ |y|).
    - Unconditionally, we show near-optimal NP-hardness of approximation for the Minimum Oracle Circuit Size Problem (MOCSP), where Yes instances have circuit complexity at most 2^{εn}, and No instances are essentially as hard as random truth tables. Our reduction builds on a witness encryption construction proposed by Garg, Gentry, Sahai, and Waters (STOC'13). Previously, it was unknown whether it is NP-hard to distinguish between oracle circuit complexity s versus 10s log N.
    - Finally, we define a multi-valued version of MCSP, called mvMCSP, and show that with probability 1 over a random oracle O, mvMCSP^O is NP-hard to approximate under quasi-polynomial-time reductions with O oracle access. Intriguingly, this result follows almost directly from the security of Micali's CS proofs (Micali, SICOMP'00).
    In conclusion, we give three results convincingly demonstrating the power of cryptographic techniques in proving NP-hardness of approximating meta-complexity.
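
    The quantity K^t(x | y), i.e. the length of the shortest program that prints x within t steps when given y, can be made concrete with a deliberately tiny stand-in for the universal machine. The 4-instruction toy language below is invented for this sketch (it is not the universal machine of the actual definition), but it shows both the role of the conditional input y and the brute-force nature of computing the measure exactly.

```python
from itertools import product

# Toy instruction set standing in for a universal machine U:
#   'a'/'b'  append a literal character to the output
#   'Y'      append the next unread character of the auxiliary input y
#   '*'      append the entire unread remainder of y
def run(program: str, y: str, t: int):
    out, ypos = [], 0
    for steps, op in enumerate(program, 1):
        if steps > t:
            return None                      # exceeded the time bound t
        if op in "ab":                       # literal character
            out.append(op)
        elif op == "Y":                      # copy next unread char of y
            if ypos >= len(y):
                return None
            out.append(y[ypos]); ypos += 1
        else:                                # '*': copy the rest of y
            out.append(y[ypos:]); ypos = len(y)
    return "".join(out)

def toy_Kt(x: str, y: str, t: int):
    """Brute force over all programs in length order: the length of the
    shortest program printing x (given y) within t steps, or None."""
    for length in range(len(x) + 2):
        for prog in product("abY*", repeat=length):
            if run("".join(prog), y, t) == x:
                return length
    return None

# Given y = x, the single instruction '*' suffices; with empty y the
# program must spell x out literally.
assert toy_Kt("abba", "abba", 10) == 1
assert toy_Kt("abba", "", 10) == 4
```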